Registration of longitudinal brain magnetic resonance imaging (MRI) scans containing pathology is challenging due to changes in tissue appearance and remains an unsolved problem. This paper describes the first Brain Tumor Sequence Registration (BraTS-Reg) challenge, which focuses on estimating correspondences between pre-operative and follow-up scans of the same patient diagnosed with a diffuse brain glioma. The BraTS-Reg challenge intends to establish a public benchmark environment for deformable registration algorithms. The associated dataset comprises de-identified multi-institutional multi-parametric MRI (mpMRI) data, curated for the size and resolution of each scan according to a common anatomical template. Clinical experts generated extensive annotations of landmark points within the scans, describing distinct anatomical locations across the temporal domain. The training data, together with these ground-truth annotations, will be released to participants to design and develop their registration algorithms, whereas the annotations for the validation and test data will be withheld by the organizers and used to evaluate the participants' containerized algorithms. Each submitted algorithm will be evaluated quantitatively using several metrics, such as the Median Absolute Error (MAE), robustness, and the determinant of the Jacobian of the deformation field.
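As a rough illustration of the landmark-based evaluation described above, the following sketch computes the median absolute error between landmark positions produced by a registration algorithm and the expert-annotated ground truth. It is our own illustration with hypothetical variable names, not the challenge's official evaluation code.

```python
import numpy as np

def median_absolute_error(pred_landmarks: np.ndarray, gt_landmarks: np.ndarray) -> float:
    """Median absolute error between predicted and ground-truth landmarks.

    Both arrays have shape (N, 3): N landmarks with (x, y, z) coordinates in mm.
    """
    # Euclidean distance between each mapped landmark and its ground-truth match.
    errors = np.linalg.norm(pred_landmarks - gt_landmarks, axis=1)
    # The median over landmarks is robust to a few badly registered points.
    return float(np.median(errors))

# Example: 4 landmarks mapped by a deformable registration vs. expert annotations.
pred = np.array([[10.2, 33.1, 20.0], [45.0, 12.3, 8.8], [30.5, 29.9, 15.2], [5.1, 40.0, 22.7]])
gt   = np.array([[10.0, 33.0, 20.5], [44.2, 12.0, 9.0], [31.0, 30.1, 15.0], [5.0, 41.2, 22.0]])
print(median_absolute_error(pred, gt))
```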
Current treatment planning for patients diagnosed with brain tumors could benefit significantly from access to the spatial distribution of tumor cell concentration. Existing diagnostic modalities, such as magnetic resonance imaging (MRI), contrast well the regions of high cell density. However, they do not portray regions of low concentration, which can often serve as a source for the secondary appearance of the tumor after treatment. Numerical simulations of tumor growth complement the imaging information by providing estimates of the full spatial distribution of tumor cells. In recent years, a corpus of literature on medical-image-based tumor modeling has been published. It includes different mathematical formalisms describing the forward tumor growth model. Alongside, various parametric inference schemes have been developed to perform efficient tumor model personalization, i.e., to solve the inverse problem. However, a unifying drawback of all existing approaches is the time complexity of the model personalization, which prohibits a potential integration of the modeling into clinical settings. In this work, we introduce a methodology for inferring the patient-specific spatial distribution of a brain tumor from T1Gd and FLAIR MRI scans. Coined Learn-Morph-Infer, the method achieves real-time performance on the order of minutes on widely available hardware, and the compute time is stable across tumor models of different complexity, such as reaction-diffusion and reaction-advection-diffusion models. We believe the proposed inverse-solution approach not only paves the way for the clinical translation of brain tumor personalization but can also be adopted in other scientific and engineering domains.
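For context, the forward models referred to above are commonly written in a reaction-diffusion form; a standard Fisher-Kolmogorov sketch (not necessarily the exact parameterization used in this work) is

\[
\frac{\partial u}{\partial t} \;=\; \nabla \cdot \left( D \, \nabla u \right) \;+\; \rho \, u \,(1 - u),
\]

where \(u(\mathbf{x}, t)\) is the normalized tumor cell density, \(D\) a (possibly tissue-dependent) diffusion coefficient or tensor, and \(\rho\) the proliferation rate. The reaction-advection-diffusion variant adds an advective term of the form \(-\nabla \cdot (u\,\mathbf{v})\) for a velocity field \(\mathbf{v}\), and model personalization amounts to inferring quantities such as \(D\), \(\rho\), and the tumor origin from the imaging data.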
The monograph summarizes and analyzes the current state of development of computer and mathematical simulation and modeling, the automation of management processes, the use of information technologies in education, the design of information systems and software complexes, and the development of computer telecommunication networks and technologies: most of the areas that are united by the term Industry 4.0.
This short report reviews the current state of the research and methodology on theoretical and practical aspects of Artificial Neural Networks (ANN). It was prepared to gather state-of-the-art knowledge needed to construct complex, hypercomplex and fuzzy neural networks. The report reflects the individual interests of the authors and by no means can it be treated as a comprehensive review of the ANN discipline. Considering the fast development of this field, a detailed review would currently be impossible within a reasonable number of pages. The report is an outcome of the Project 'The Strategic Research Partnership for the mathematical aspects of complex, hypercomplex and fuzzy neural networks' meeting at the University of Warmia and Mazury in Olsztyn, Poland, organized in September 2022.
Using robots in educational contexts has already been shown to be beneficial for a student's learning and social behaviour. To elevate them to the next level of providing more effective and human-like tutoring, the ability to adapt to the user and to express proactivity is fundamental. By acting proactively, intelligent robotic tutors anticipate possible situations where problems for the student may arise and act in advance to prevent negative outcomes. Still, the decisions of when and how to behave proactively are open questions. Therefore, this paper investigates how the student's cognitive-affective states can be used by a robotic tutor to trigger proactive tutoring dialogue, with the aim of improving the learning experience. For this reason, we observed a concept-learning task scenario in which a robotic assistant proactively helped when negative user states were detected. In the learning task, the user's states of frustration and confusion were deemed to have negative effects on the outcome of the task and were used to trigger proactive behaviour. In an empirical user study with 40 undergraduate and doctoral students, we studied whether the initiation of proactive behaviour after the detection of signs of confusion and frustration improves the student's concentration and trust in the agent. Additionally, we investigated which level of proactive dialogue is useful for promoting the student's concentration and trust. The results show that high proactive behaviour harms trust, especially when triggered during negative cognitive-affective states, but contributes to keeping the student focused on the task when triggered in these states. Based on our study results, we further discuss future steps for improving the proactive assistance of robotic tutoring systems.
Sunquakes are seismic emissions visible on the solar surface, associated with some solar flares. Although discovered in 1998, they have only recently become a more commonly detected phenomenon. Despite the availability of several manual detection guidelines, to our knowledge, the astrophysical data produced for sunquakes is new to the field of Machine Learning. Detecting sunquakes is a daunting task for human operators and this work aims to ease and, if possible, to improve their detection. Thus, we introduce a dataset constructed from acoustic egression-power maps of solar active regions obtained for Solar Cycles 23 and 24 using the holography method. We then present a pedagogical approach to the application of machine learning representation methods for sunquake detection using AutoEncoders, Contrastive Learning, Object Detection and recurrent techniques, which we enhance by introducing several custom domain-specific data augmentation transformations. We address the main challenges of the automated sunquake detection task, namely the very high noise patterns in and outside the active region shadow and the extreme class imbalance given by the limited number of frames that present sunquake signatures. With our trained models, we find temporal and spatial locations of peculiar acoustic emission and qualitatively associate them to eruptive and high energy emission. While noting that these models are still in a prototype stage and there is much room for improvement in metrics and bias levels, we hypothesize that their agreement on example use cases has the potential to enable detection of weak solar acoustic manifestations.
Transfer learning on the edge is challenging due to limited on-device resources. Existing work addresses this issue by training a subset of parameters or adding model patches. Developed with inference in mind, Inverted Residual Blocks (IRBs) split a convolutional layer into depthwise and pointwise convolutions, leading to more stacked layers, e.g., convolution, normalization, and activation layers. Though they are efficient for inference, IRBs require that additional activation maps be stored in memory when training the weights of convolution layers and the scales of normalization layers. As a result, their high memory cost prohibits training IRBs on resource-limited edge devices and makes them unsuitable in the context of transfer learning. To address this issue, we present MobileTL, a memory- and computationally efficient on-device transfer learning method for models built with IRBs. MobileTL trains the shifts for internal normalization layers to avoid storing activation maps for the backward pass. Also, MobileTL approximates the backward computation of the activation layer (e.g., Hard-Swish and ReLU6) as a signed function, which enables storing a binary mask instead of activation maps for the backward pass. MobileTL fine-tunes a few top blocks (close to the output) rather than propagating the gradient through the whole network to reduce the computation cost. Our method reduces memory usage by 46% and 53% for MobileNetV2 and V3 IRBs, respectively. For MobileNetV3, we observe a 36% reduction in floating-point operations (FLOPs) when fine-tuning 5 blocks, while only incurring a 0.6% accuracy reduction on CIFAR10. Extensive experiments on multiple datasets demonstrate that our method is Pareto-optimal (best accuracy under given hardware constraints) compared to prior work in transfer learning for edge devices.
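To make the activation-memory saving concrete, here is a minimal PyTorch-style sketch (our own illustration, not the authors' released code) of giving an activation such as ReLU6 a backward pass that only needs a one-bit mask per element instead of the full activation map:

```python
import torch

class MaskedReLU6(torch.autograd.Function):
    """ReLU6 whose backward pass only requires a binary mask.

    The gradient of ReLU6 is 1 on (0, 6) and 0 elsewhere, so a boolean mask
    is enough; no full-precision activation map has to be saved for backward.
    """

    @staticmethod
    def forward(ctx, x):
        mask = (x > 0) & (x < 6)      # binary mask: where the gradient is nonzero
        ctx.save_for_backward(mask)   # stores bools instead of float activations
        return torch.clamp(x, 0.0, 6.0)

    @staticmethod
    def backward(ctx, grad_out):
        (mask,) = ctx.saved_tensors
        return grad_out * mask        # step/signed approximation of the gradient

x = torch.randn(4, 8, requires_grad=True)
y = MaskedReLU6.apply(x)
y.sum().backward()
print(x.grad.shape)
```

For ReLU6 the mask-based gradient is exact; for smoother activations such as Hard-Swish it is an approximation, which is the trade-off the abstract alludes to.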
Training models on data obtained from randomized experiments is ideal for making good decisions. However, randomized experiments are often time-consuming, expensive, risky, infeasible, or unethical to perform, leaving decision makers with little choice but to rely on observational data collected under historical policies when training models. This raises questions not only about whether the learned policies will perform well in practice, but also about the impact of different data-collection protocols on the performance of the various policies trained on the data, and about the robustness of those policies to problem characteristics such as action- or reward-specific delays in observing outcomes. We aim to answer such questions for the problem of optimizing sales channel allocation at LinkedIn, where sales accounts (leads) need to be allocated to one of three channels, with the goal of maximizing the number of successful conversions over a period of time. A key problem characteristic is the presence of stochastic delays in observing allocation outcomes, whose distribution is both channel- and outcome-dependent. We built a discrete-time simulation that can handle our problem characteristics and used it to evaluate: a) a historical rule-based policy; b) a supervised machine learning policy (XGBoost); and c) multi-armed bandit (MAB) policies, under different scenarios involving: i) the data collection used for training (observational vs. randomized); ii) lead conversion scenarios; and iii) delay distributions. Our simulation results indicate that LinUCB, a simple MAB policy, consistently outperforms the other policies, achieving an 18-47% lift relative to the rule-based policy.
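For reference, a minimal sketch of the disjoint LinUCB policy mentioned above is given below. It is illustrative only; the feature representation and the exploration parameter alpha are placeholders, not LinkedIn's production choices.

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB: one ridge-regression model per channel (arm)."""

    def __init__(self, n_arms: int, dim: int, alpha: float = 1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vectors

    def choose(self, x: np.ndarray) -> int:
        """Pick the channel with the highest upper confidence bound for lead features x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        """Delayed conversion outcomes can be fed in whenever they are observed."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

policy = LinUCB(n_arms=3, dim=5)
lead = np.random.rand(5)
arm = policy.choose(lead)
policy.update(arm, lead, reward=1.0)
```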
Continual learning of a sequence of tasks is an active area of research in deep neural networks. The main challenge investigated has been the phenomenon of catastrophic forgetting, or interference with the knowledge acquired in previous tasks. Recent work has investigated forward knowledge transfer to new tasks. Backward transfer, which improves the knowledge gained during previous tasks, has received much less attention. In general, there is limited understanding of how knowledge transfer can aid tasks that are learned continually. We present a theory of knowledge transfer in continual supervised learning that considers both forward and backward transfer. We aim to understand their impact on increasingly knowledgeable learners. We derive error bounds for each of these transfer mechanisms. These bounds are agnostic to the specific implementation (e.g., a deep neural network). We demonstrate that, for a continual learner that observes related tasks, both forward and backward transfer can improve performance as more tasks are observed.
As the machine learning and systems communities strive for higher energy efficiency through custom deep neural network (DNN) accelerators, diverse precision or quantization levels, and model compression techniques, there is a need for design space exploration frameworks that incorporate quantization-aware processing elements into the accelerator design space while providing accurate and fast power, performance, and area models. In this work, we present QUIDAM, a highly parameterized quantization-aware DNN accelerator and model co-exploration framework. Our framework can facilitate future research on DNN accelerator design space exploration for various design choices such as bit precision, processing element type, scratchpad size of the processing elements, global buffer size, total number of processing elements, and DNN configurations. Our results show that different bit precisions and processing element types lead to significant differences in terms of performance per area and energy. Specifically, our framework identifies a wide range of design points where performance per area and energy differ by more than 5x and 35x, respectively. With the proposed framework, we show that lightweight processing elements achieve on-par accuracy results and up to 5.7x improvement in performance per area and energy compared to the best INT16-based implementation. Finally, due to the efficiency of the pre-characterized power, performance, and area models, QUIDAM can speed up the design exploration process by 3-4 orders of magnitude, as it removes the need for expensive synthesis and characterization of each design.
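As an illustration of the kind of design space being swept, a design point could be encoded as follows; the parameter names and value ranges are ours for illustration, not QUIDAM's actual API or configuration format.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class AcceleratorDesign:
    bit_precision: int     # e.g. 4, 8, 16
    pe_type: str           # processing element type, e.g. "bit_serial" or "int_mac"
    pe_scratchpad_kb: int  # scratchpad size per processing element
    global_buffer_kb: int  # shared global buffer size
    num_pes: int           # total number of processing elements

# Enumerate a small slice of the design space; a co-exploration framework would
# score each point with pre-characterized power, performance, and area models
# instead of synthesizing and characterizing every design.
space = [
    AcceleratorDesign(bp, pe, sp, gb, n)
    for bp, pe, sp, gb, n in product(
        [4, 8, 16], ["bit_serial", "int_mac"], [1, 2], [128, 256], [64, 256]
    )
]
print(len(space), "candidate design points")
```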